Mateo/dev 25 write connecting arcade tools to your llm page #595
Conversation
| "content": tool_result, | ||
| }) | ||
|
|
||
| continue |
Bug: Missing assistant message before tool results in history
When the LLM returns tool calls, the code appends tool result messages to `history` but never appends the assistant message that contained the `tool_calls`. The OpenAI API (and compatible APIs like OpenRouter) requires the assistant message with `tool_calls` to appear in the conversation history before the corresponding tool result messages. This will cause an API error on the next iteration of the loop when the malformed history is sent back to the model.
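A minimal sketch of the fix, assuming the OpenAI-style message object used in the guide (`message.tool_calls` entries with `.id`, `.function.name`, `.function.arguments`); `execute_tool` is an illustrative placeholder, not a function from the PR:

```python
def append_tool_turn(history: list, message, execute_tool) -> None:
    """Record one tool-calling turn in the order the API expects."""
    # First, the assistant message that carries the tool_calls.
    history.append({
        "role": "assistant",
        "content": message.content,
        "tool_calls": [
            {
                "id": tc.id,
                "type": "function",
                "function": {
                    "name": tc.function.name,
                    "arguments": tc.function.arguments,
                },
            }
            for tc in message.tool_calls
        ],
    })
    # Only then, one tool message per tool call.
    for tc in message.tool_calls:
        history.append({
            "role": "tool",
            "tool_call_id": tc.id,
            "content": execute_tool(tc),  # illustrative helper
        })
```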
nearestnabors left a comment:
I need more OpenRouter tokens to actually test that it works, but wanted to share what I have so far!
Co-authored-by: RL "Nearest" Nabors <[email protected]>
```python
# Print the latest assistant response
assistant_response = history[-1]["content"]
print(f"\n🤖 Assistant: {assistant_response}\n")
```
Bug: Tool result shown as assistant response when `max_turns` is exhausted
When `invoke_llm` exhausts `max_turns` while the assistant is still making tool calls, the function returns with a tool response as the last history item. The `chat()` function then accesses `history[-1]["content"]` and prints it prefixed with "🤖 Assistant:", displaying raw tool output as if it were the assistant's response. This produces confusing output when many consecutive tool calls are needed.
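One way `chat()` could guard against this (a sketch, assuming `history` is the message list returned by `invoke_llm`):

```python
# Only present history[-1] as the assistant's reply if it really is one.
last = history[-1]
if last["role"] == "assistant":
    print(f"\n🤖 Assistant: {last['content']}\n")
else:
    # The loop stopped mid tool-call; don't surface raw tool output.
    print("\n🤖 Assistant: (stopped after hitting the tool-call limit)\n")
```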
nearestnabors left a comment:
Co-authored-by: vfanelle <[email protected]>
Holding the merge until nav is merged.
```python
# Print the latest assistant response
assistant_response = history[-1]["content"]
print(f"\n🤖 Assistant: {assistant_response}\n")
```
Max turns exceeded causes a tool result to be printed as the response
Medium Severity
The `chat()` function assumes `history[-1]` is always an assistant message and prints its content as the assistant's response. However, when `invoke_llm` exits because `max_turns` is exceeded while still processing tool calls, `history[-1]` is a tool message with `role: "tool"`. This causes the raw tool result (likely a JSON string) to be displayed to the user as "🤖 Assistant:" output, creating confusing behavior.
🔬 Verification Test
Why verification test was not possible: This edge case requires the LLM to continuously make tool calls until the `max_turns` limit (5) is reached without providing a final response. This is difficult to trigger reliably in testing as it depends on LLM behavior and requires actual API credentials. The logic flaw is evident from code inspection: when the while loop exits due to `turns >= max_turns` during tool processing, no assistant message is appended, yet `chat()` unconditionally treats `history[-1]["content"]` as the assistant response.
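A complementary fix inside `invoke_llm` itself (a sketch placed just before `invoke_llm` returns; the fallback wording is illustrative):

```python
# If the turn budget ran out while tools were still being called, close
# the exchange with a real assistant message so callers never mistake
# raw tool output for the reply.
if history[-1]["role"] != "assistant":
    history.append({
        "role": "assistant",
        "content": (
            "I reached the tool-call limit before finishing. "
            "Please try again or narrow the request."
        ),
    })
```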
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```typescript
// Handle all interrupts
const decisions: any[] = [];
for (const interrupt of interrupts) {
  decisions.push(await handleAuthInterrupt(interrupt, rl));
```
Function called with extra parameter it doesn't accept
Medium Severity
The `handleAuthInterrupt` function is defined with a single parameter (`interrupt: Interrupt`) at lines 297-321, but it's called with two parameters, `handleAuthInterrupt(interrupt, rl)`, at lines 414 and 718. The `rl` parameter (readline interface) is passed but never used, suggesting missing functionality in the function definition.
```typescript
useEffect(() => {
  if (!posthog) {
    return;
  }
```
PostHog initialization removed but components still use it
High Severity
The `app/_components/posthog.tsx` file containing `posthog.init()` and `PostHogProvider` is completely deleted, but multiple components still import and use `posthog` directly from `posthog-js`. Without initialization, `posthog.capture()` calls will silently fail (no analytics), and `posthog.onFeatureFlags()` and `posthog.getSurveys()` in `EarlyAccessRegistrySurvey` will not work, breaking the survey functionality entirely.
Style Review
Found 12 style suggestions.
Powered by Vale + Claude
### Write a helper function that handles the LLM's invocation

There are many orchestration patterns that can be used to handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern we will implement in this example.
write-good.ThereIs: Removed 'There are' and changed 'we will' to 'you will'
```diff
- There are many orchestration patterns that can be used to handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern we will implement in this example.
+ Many orchestration patterns handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern you will implement in this example.
```
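For reference, a minimal sketch of the ReAct-style loop this paragraph describes, assuming an OpenAI-compatible client pointed at OpenRouter; the model name and the `call_tool` helper are illustrative assumptions, not code from the PR:

```python
def invoke_llm(client, history, tools, max_turns=5):
    """Loop between the LLM and tools until the model answers
    without requesting any tool calls (ReAct-style)."""
    for _ in range(max_turns):
        response = client.chat.completions.create(
            model="openai/gpt-4o-mini",  # illustrative model name
            messages=history,
            tools=tools,
        )
        message = response.choices[0].message
        if not message.tool_calls:
            # Final response: no tools requested, so the loop ends.
            history.append({"role": "assistant", "content": message.content})
            return history
        # Record the assistant turn that requested the tools...
        history.append(message)
        # ...then one tool message per call, in order.
        for tc in message.tool_calls:
            history.append({
                "role": "tool",
                "tool_call_id": tc.id,
                "content": call_tool(tc),  # illustrative helper
            })
    return history
```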
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Style Review
Found 1 style suggestion.
Powered by Vale + Claude
### Write a helper function that handles the LLM's invocation

There are many orchestration patterns that can be used to handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern we will implement in this example.
write-good.ThereIs: Removed 'There are' sentence starter and changed 'we will implement' to 'you will implement' per style guide
```diff
- There are many orchestration patterns that can be used to handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern we will implement in this example.
+ Many orchestration patterns can handle the LLM invocation. A common pattern is a ReAct architecture, where the user prompt will result in a loop of messages between the LLM and the tools, until the LLM provides a final response (no tool calls). This is the pattern you will implement in this example.
```


Preview here: https://docs-git-mateo-dev-25-write-connecting-arcade-a84eaa-arcade-ai.vercel.app/en/guides/agent-frameworks/setup-arcade-with-your-llm-python
Note
Introduces a concise Python guide for integrating Arcade tools with an LLM.
- Adds `guides/agent-frameworks/setup-arcade-with-your-llm-python/page.mdx` with step-by-step setup: project init, env vars, retrieving formatted tool definitions, auth+execute helper, multi-turn ReAct-style loop (`invoke_llm`), and an interactive `chat()` example using OpenRouter
- Updates `_meta.tsx` and `public/llms.txt` to index/link the new "Connect Arcade to your LLM" doc

Written by Cursor Bugbot for commit 5f501dd.
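For context, the auth+execute helper the note refers to might look roughly like this sketch. It assumes the `arcadepy` client and its `tools.authorize` / `auth.wait_for_completion` / `tools.execute` methods as described in Arcade's docs; the tool name, input, and user id are placeholders:

```python
from arcadepy import Arcade

def authorize_and_execute(tool_name: str, tool_input: dict, user_id: str):
    """Authorize the user for a tool if needed, then execute it."""
    client = Arcade()  # reads ARCADE_API_KEY from the environment
    auth = client.tools.authorize(tool_name=tool_name, user_id=user_id)
    if auth.status != "completed":
        # The user grants access by visiting this URL once.
        print(f"Authorize here: {auth.url}")
        client.auth.wait_for_completion(auth)
    result = client.tools.execute(
        tool_name=tool_name,
        input=tool_input,
        user_id=user_id,
    )
    return result.output.value
```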